Search for: All records

Creators/Authors contains: "Hawkins, Robert D"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. The molecular mechanisms underlying age-related declines in learning and long-term memory are still not fully understood. To address this gap, we investigated the transcriptional landscape of a single identified motor neuron, L7, in Aplysia, which is pivotal in a specific form of nonassociative learning known as sensitization of the siphon-withdrawal reflex. Employing total RNA-seq analysis of a single isolated L7 motor neuron after short-term or long-term sensitization (LTS) training of Aplysia at 8, 10, and 12 months (representing mature, late mature, and senescent stages), we uncovered aberrant changes in transcriptional plasticity during aging. Our findings specifically highlight changes in the expression of messenger RNAs (mRNAs) that encode transcription factors, translation regulators, RNA methylation participants, and contributors to cytoskeletal rearrangements during learning, as well as changes in long noncoding RNAs (lncRNAs). Furthermore, our comparative gene expression analysis identified distinct transcriptional alterations in two other neurons, the motor neuron L11 and the giant cholinergic neuron R2, whose roles in LTS are not yet fully elucidated. Taken together, our analyses underscore cell type-specific impairments in the expression of key components related to learning and memory within the transcriptome as organisms age, shedding light on the complex molecular mechanisms driving cognitive decline during aging.
  2. How do drawings, ranging from detailed illustrations to schematic diagrams, reliably convey meaning? Do viewers understand drawings based on how strongly they resemble an entity (i.e., as images) or based on socially mediated conventions (i.e., as symbols)? Here we evaluate a cognitive account of pictorial meaning in which visual and social information jointly support visual communication. Pairs of participants used drawings to repeatedly communicate the identity of a target object among multiple distractor objects. We manipulated social cues across three experiments and a full replication, finding that participants developed object-specific and interaction-specific strategies for communicating more efficiently over time, beyond what task practice or a resemblance-based account alone could explain. Leveraging model-based image analyses and crowdsourced annotations, we further determined that drawings did not drift toward “arbitrariness,” as predicted by a pure convention-based account, but preserved visually diagnostic features. Taken together, these findings advance psychological theories of how successful graphical conventions emerge.
  3. Speakers use different language to communicate with partners in different communities. But how do we learn and represent which conventions to use with which partners? In this paper, we argue that solving this challenging computational problem requires speakers to supplement their lexical representations with knowledge of social group structure. We formalize this idea by extending a recent hierarchical Bayesian model of convention formation with an intermediate layer explicitly representing the latent communities each partner belongs to, and derive predictions about how conventions formed within a group ought to extend to new in-group and out-group members. We then present evidence from two behavioral experiments testing these predictions using a minimal group paradigm. Taken together, our findings provide a first step toward a formal framework for understanding the interplay between language use and social group knowledge.
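The modeling idea in entry 3 can be made concrete with a small generative sketch. The code below is a hypothetical illustration, not the authors' implementation: each partner is assigned to a latent community, each community carries its own lexicon over labels, and a listener's posterior over a new partner's community predicts which conventions should transfer to them. All names and parameter values (N_COMMUNITIES, N_MEANINGS, ALPHA, KAPPA) are illustrative assumptions.

```python
# Hypothetical sketch of a hierarchical convention model with a latent community
# layer (illustrative only; not the paper's code or parameterization).
import numpy as np

rng = np.random.default_rng(0)

N_COMMUNITIES = 2   # latent social groups (assumed)
N_MEANINGS = 4      # possible labels for a referent (assumed)
ALPHA = 1.0         # concentration of community-level lexicons (assumed)
KAPPA = 20.0        # how tightly a partner tracks their community's lexicon (assumed)

# Community-level lexicons: phi[c, m] = probability that community c uses label m
phi = rng.dirichlet(np.full(N_MEANINGS, ALPHA), size=N_COMMUNITIES)

def sample_partner_lexicon(community):
    """Partner-specific lexicon drawn around that community's lexicon."""
    return rng.dirichlet(KAPPA * phi[community])

def posterior_over_community(observed_labels):
    """Posterior over a new partner's community given the labels they produced.

    Approximates P(c | labels) proportional to P(c) * prod_t phi[c, label_t],
    scoring labels against the community-level means rather than integrating
    out the partner-specific lexicon.
    """
    log_post = np.log(np.full(N_COMMUNITIES, 1.0 / N_COMMUNITIES))
    for label in observed_labels:
        log_post += np.log(phi[:, label])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# A partner from community 0 produces a few labels; the listener's belief about
# their community shifts, which in turn predicts which previously formed
# conventions should extend to this new in-group or out-group member.
partner_lexicon = sample_partner_lexicon(community=0)
labels = rng.choice(N_MEANINGS, size=5, p=partner_lexicon)
print("posterior over communities:", posterior_over_community(labels))
```

With more observations from a partner, the posterior over their community sharpens; a fuller treatment would also infer the community-level lexicons themselves and model partner-specific deviations, which this sketch approximates by scoring labels against the community means.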
  4. Many real-world tasks require agents to coordinate their behavior to achieve shared goals. Successful collaboration requires not only adopting the same communicative conventions, but also grounding these conventions in the same task-appropriate conceptual abstractions. We investigate how humans use natural language to collaboratively solve physical assembly problems more effectively over time. Human participants were paired up in an online environment to reconstruct scenes containing two block towers. One participant could see the target towers, and sent assembly instructions for the other participant to reconstruct. Participants provided increasingly concise instructions across repeated attempts on each pair of towers, using more abstract referring expressions that captured each scene's hierarchical structure. To explain these findings, we extend recent probabilistic models of ad hoc convention formation with an explicit perceptual learning mechanism. These results shed light on the inductive biases that enable intelligent agents to coordinate upon shared procedural abstractions.
  5. We explore unconstrained natural language feedback as a learning signal for artificial agents. Humans use rich and varied language to teach, yet most prior work on interactive learning from language assumes a particular form of input (e.g., commands). We propose a general framework which does not make this assumption, instead using aspect-based sentiment analysis to decompose feedback into sentiment over the features of a Markov decision process. We then infer the teacher's reward function by regressing the sentiment on the features, an analogue of inverse reinforcement learning. To evaluate our approach, we first collect a corpus of teaching behavior in a cooperative task where both teacher and learner are human. We implement three artificial learners: sentiment-based "literal" and "pragmatic" models, and an inference network trained end-to-end to predict rewards. We then re-run our initial experiment, pairing human teachers with these artificial learners. All three models successfully learn from interactive human feedback. The inference network approaches the performance of the "literal" sentiment model, while the "pragmatic" model nears human performance. Our work provides insight into the information structure of naturalistic linguistic feedback as well as methods to leverage it for reinforcement learning.
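The reward-inference step described in entry 5 can be sketched under a strong simplifying assumption of a linear reward over named task features. In the toy code below, a stand-in aspect-based sentiment step maps each feedback utterance to per-feature sentiment, and a least-squares regression of that sentiment on feature indicators recovers reward weights; the feature names, sentiment lexicon, and example utterances are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the paper's implementation): decompose free-form
# feedback into per-feature sentiment, then regress sentiment on feature
# indicators to recover a linear reward, loosely analogous to inverse RL.
import numpy as np

FEATURES = ["speed", "safety", "battery"]  # assumed task features

def aspect_sentiment(utterance):
    """Toy stand-in for aspect-based sentiment analysis.

    Returns {feature: sentiment in [-1, 1]} for each task feature the
    utterance mentions, using the utterance's overall polarity.
    """
    lexicon = {"good": 1.0, "great": 1.0, "love": 1.0,
               "bad": -1.0, "terrible": -1.0, "hate": -1.0}
    tokens = utterance.lower().split()
    polarity = sum(lexicon.get(t, 0.0) for t in tokens)
    polarity = max(-1.0, min(1.0, polarity))
    return {t: polarity for t in tokens if t in FEATURES}

def fit_reward(feedback):
    """Least-squares regression of sentiment on one-hot feature indicators."""
    X, y = [], []
    for utterance in feedback:
        for feature, sentiment in aspect_sentiment(utterance).items():
            row = np.zeros(len(FEATURES))
            row[FEATURES.index(feature)] = 1.0
            X.append(row)
            y.append(sentiment)
    weights, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return dict(zip(FEATURES, weights))

# Each utterance comments on features of the learner's behavior; the fitted
# weights define the inferred reward over those features.
feedback = ["the speed was great", "terrible battery life", "love the safety padding"]
print(fit_reward(feedback))
```

With one-hot indicators this regression reduces to the mean sentiment expressed about each feature; regressing on full state feature vectors instead, as one might in a richer setting, would let feedback about one situation inform weights on correlated features.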
  6. Languages typically provide more than one grammatical construction to express certain types of messages. A speaker's choice of construction is known to depend on multiple factors, including the choice of main verb, a phenomenon known as verb bias. Here we introduce DAIS, a large benchmark dataset containing 50K human judgments for 5K distinct sentence pairs in the English dative alternation. This dataset includes 200 unique verbs and systematically varies the definiteness and length of arguments. We use this dataset, as well as an existing corpus of naturally occurring data, to evaluate how well recent neural language models capture human preferences. Results show that larger models perform better than smaller models, and transformer architectures (e.g., GPT-2) tend to outperform recurrent architectures (e.g., LSTMs) even under comparable parameter and training settings. Additional analyses of internal feature representations suggest that transformers may better integrate specific lexical information with grammatical constructions.
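One simple way to probe the kind of preference studied in entry 6 is to compare a causal language model's total log-probability for the two variants of a dative pair. The sketch below assumes the Hugging Face transformers and torch packages and the public gpt2 checkpoint; the example sentences and the summed-log-probability scoring rule are illustrative choices, not the DAIS evaluation protocol.

```python
# Hypothetical sketch: score double-object vs. prepositional-object variants of a
# dative sentence under GPT-2 and see which the model assigns higher probability.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence):
    """Summed token log-probability of a sentence under the language model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood over the predicted tokens
    n_predicted = inputs["input_ids"].size(1) - 1
    return -outputs.loss.item() * n_predicted

# Two constructions expressing the same message (example sentences are invented)
double_object = "The teacher gave the student a book."
prepositional = "The teacher gave a book to the student."
scores = {s: sentence_log_prob(s) for s in (double_object, prepositional)}
for sentence, score in scores.items():
    print(f"{score:8.2f}  {sentence}")
print("model prefers:", max(scores, key=scores.get))
```

Because the prepositional variant adds a word, length-normalized scores (dividing by the number of predicted tokens) are a reasonable alternative to summed log-probabilities when comparing the two constructions.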